
    Abstraction and Learning for Infinite-State Compositional Verification

    Despite many advances that enable the application of model checking techniques to the verification of large systems, the state-explosion problem remains the main challenge for scalability. Compositional verification addresses this challenge by decomposing the verification of a large system into the verification of its components. Recent techniques use learning-based approaches to automate compositional verification based on assume-guarantee style reasoning. However, these techniques are only applicable to finite-state systems. In this work, we propose a new framework that interleaves abstraction and learning to perform automated compositional verification of infinite-state systems. We also discuss the role of learning and abstraction in the related context of interface generation for infinite-state components.
    Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
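
    For reference, the classic assume-guarantee rule that such learning-based techniques automate (with the assumption A typically generated by a learning algorithm such as L*) can be written, in LaTeX notation, as:

        \frac{\langle A \rangle\, M_1\, \langle P \rangle \qquad
              \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
             {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}

    Here \langle A \rangle\, M\, \langle P \rangle states that component M satisfies property P in any environment that satisfies assumption A.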

    Work-in-progress: Assume-guarantee reasoning with ioco

    This paper presents a combination of the assume-guarantee paradigm and the testing relation ioco. The assume-guarantee paradigm is a "divide and conquer" technique that decomposes the verification of a system into smaller tasks that involve the verification of its components. The principal aspect of assume-guarantee reasoning is to consider each component separately, while taking into account assumptions about the context of the component. The testing relation ioco is a formal conformance relation for model-based testing that works on labeled transition systems. Our main result shows that, with certain restrictions, assume-guarantee reasoning can be applied in the context of ioco. This enables testing ioco-conformance of a system by testing its components separately.
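
    For readers unfamiliar with the relation, the standard definition of ioco (due to Tretmans) for an implementation i and a specification s is:

        i \;\mathbf{ioco}\; s \;\iff\; \forall \sigma \in \mathit{Straces}(s):\;
            \mathit{out}(i \;\mathbf{after}\; \sigma) \;\subseteq\; \mathit{out}(s \;\mathbf{after}\; \sigma)

    where Straces(s) are the suspension traces of s and out(\cdot) collects the outputs (including quiescence) possible after a trace.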

    Improving Trust in Deep Neural Networks with Nearest Neighbors

    Deep neural networks are used increasingly for perception and decision-making in UAVs. For example, they can be used to recognize objects from images and decide what actions the vehicle should take. While deep neural networks can perform very well at complex tasks, their decisions may be unintuitive to a human operator. When a human disagrees with a neural network prediction, due to the black box nature of deep neural networks, it can be unclear whether the system knows something the human does not or whether the system is malfunctioning. This uncertainty is problematic when it comes to ensuring safety. As a result, it is important to develop technologies for explaining neural network decisions for trust and safety. This paper explores a modification to the deep neural network classification layer to produce both a predicted label and an explanation to support its prediction. Specifically, at test time, we replace the final output layer of the neural network classifier with a k-nearest neighbor classifier. The nearest neighbor classifier produces 1) a predicted label through voting and 2) the nearest neighbors involved in the prediction, which represent the most similar examples from the training dataset. Because prediction and explanation are derived from the same underlying process, this approach guarantees that the explanations are always relevant to the predictions. We demonstrate the approach on a convolutional neural network for a UAV image classification task. We perform experiments using a forest trail image dataset and show empirically that the hybrid classifier can produce intuitive explanations without loss of predictive performance compared to the original neural network. We also show how the approach can be used to help identify potential issues in the network and training process.
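
    A minimal sketch of the hybrid-classifier idea, assuming a function embed that returns penultimate-layer activations and non-negative integer class labels; the names and the choice of scikit-learn are illustrative, not taken from the paper:

        # Hypothetical sketch: swap the softmax output layer for a k-NN over
        # penultimate-layer embeddings, so each prediction carries its most
        # similar training examples as an explanation.
        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        class KNNExplainHead:
            def __init__(self, embed, train_images, train_labels, k=5):
                self.embed = embed                       # images -> embeddings
                self.train_labels = np.asarray(train_labels)
                self.index = NearestNeighbors(n_neighbors=k).fit(embed(train_images))

            def predict(self, images):
                _, idx = self.index.kneighbors(self.embed(images))
                neighbor_labels = self.train_labels[idx]           # shape (n, k)
                # Majority vote over the k neighbors gives the label; idx points
                # back to the nearest training examples, which are the explanation.
                preds = np.array([np.bincount(row).argmax() for row in neighbor_labels])
                return preds, idx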

    Automated Test Case Generation for an Autopilot Requirement Prototype

    Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution that allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: the user input drives the testing of the system implementation, and the test oracles ensure that the system behaves as designed. We present early results in the context of a component in the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
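
    As a toy illustration of the test-generation step (the mode rule, variables, and outcomes below are invented, not the ADEPT autopilot model), one can solve each path condition with an SMT solver to obtain a concrete user input together with its oracle:

        # Solve each branch's path condition to get a concrete input (the test)
        # and record the branch outcome (the oracle).
        from z3 import Int, Bool, Not, Solver, sat

        alt, target = Int("alt"), Int("target")
        armed = Bool("capture_armed")

        # Toy rule: enter CAPTURE mode iff armed and within 200 ft of target.
        paths = [
            ("enters_capture", [armed, alt - target <= 200, alt - target >= -200]),
            ("stays_in_hold",  [armed, alt - target > 200]),
            ("stays_in_hold",  [Not(armed)]),
        ]

        for oracle, path_condition in paths:
            s = Solver()
            s.add(*path_condition)
            if s.check() == sat:
                print(f"test input: {s.model()}, expected: {oracle}")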

    Towards a Compositional SPIN

    This paper discusses our initial experience with introducing automated assume-guarantee verification based on learning in the SPIN tool. We believe that compositional verification techniques such as assume-guarantee reasoning could complement the state-reduction techniques that SPIN already supports, thus increasing the size of systems that SPIN can handle. We present a "light-weight" approach to evaluating the benefits of learning-based assume-guarantee reasoning in the context of SPIN: we turn our previous implementation of learning for the LTSA tool into a main program that externally invokes SPIN to provide the model checking-related answers. Despite its performance overheads (which mandate a future implementation within SPIN itself), this approach provides accurate information about the savings in memory. We have experimented with several versions of learning-based assume-guarantee reasoning, including a novel heuristic introduced here for generating component assumptions when their environment is unavailable. We illustrate the benefits of learning-based assume-guarantee reasoning in SPIN through the example of a resource arbiter for a spacecraft.
    Keywords: assume-guarantee reasoning, model checking, learning
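
    A minimal sketch of the "external oracle" setup described above, assuming spin and a C compiler are on the path; the file layout and the string matched in the verifier's output are assumptions of this sketch, not details from the paper:

        # Shell out to SPIN to answer a model-checking query, as the learner's
        # external teacher. Returns True iff the verifier reports no errors.
        import os, subprocess, tempfile

        def spin_check(promela_source: str) -> bool:
            with tempfile.TemporaryDirectory() as d:
                with open(os.path.join(d, "model.pml"), "w") as f:
                    f.write(promela_source)
                subprocess.run(["spin", "-a", "model.pml"], cwd=d, check=True)
                subprocess.run(["gcc", "-o", "pan", "pan.c"], cwd=d, check=True)
                out = subprocess.run(["./pan"], cwd=d, capture_output=True, text=True)
                return "errors: 0" in out.stdout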

    Thematic Series on Verification and Composition for the Internet of Services and Things

    The Internet of Services and Things is characterized as a distributed computing environment that will be populated by a large number of software services and things. Within this context, software systems will increasingly be built by reusing and composing together software services and things distributed over the Internet. This calls for new integration paradigms and patterns, formal composition theories, integration architectures, as well as flexible and dynamic composition and verification mechanisms. In particular, service- and thing-based systems pose new challenges for software composition and verification techniques, due to changing requirements, emerging behaviors, uncertainty, and dynamicity.

    State Event Models for the Formal Analysis of Human-Machine Interactions

    The work described in this paper was motivated by our experience with applying a framework for formal analysis of human-machine interactions (HMI) to a realistic model of an autopilot. The framework is built around a formally defined conformance relation called "full-control" between an actual system and the mental model according to which the system is operated. Systems are well-designed if they can be described by relatively simple, full-control, mental models for their human operators. For this reason, our framework supports automated generation of minimal full-control mental models for HMI systems, where both the system and the mental models are described as labelled transition systems (LTS). The autopilot that we analysed has been developed in the NASA Ames HMI prototyping tool ADEPT. In this paper, we describe how we extended the models that our HMI analysis framework handles to allow adequate representation of ADEPT models. We then provide a property-preserving reduction from these extended models to LTSs, to enable application of our LTS-based formal analysis algorithms. Finally, we briefly discuss the analyses we were able to perform on the autopilot model with our extended framework.
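
    Roughly, and as a paraphrase of earlier work on this framework rather than the paper's exact definition, full-control requires that after any trace \sigma observable on both sides, the mental model m offers exactly the commands the system s accepts and anticipates at least its possible observations:

        \forall \sigma:\quad
        A^{c}(s \;\text{after}\; \sigma) \;=\; A^{c}(m \;\text{after}\; \sigma)
        \quad\wedge\quad
        A^{o}(s \;\text{after}\; \sigma) \;\subseteq\; A^{o}(m \;\text{after}\; \sigma)

    where A^{c} and A^{o} denote the commands and observations enabled in a state.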

    Assume-Guarantee Testing

    Verification techniques for component-based systems should ideally be able to predict properties of the assembled system through analysis of individual components before assembly. This work introduces such a modular technique in the context of testing. Assume-guarantee testing relies on the (automated) decomposition of key system-level requirements into local component requirements at design time. Developers can verify the local requirements by checking components in isolation; failed checks may indicate violations of system requirements, while valid traces from different components compose via the assume-guarantee proof rule to potentially provide system coverage. These local requirements also form the foundation of a technique for efficient predictive testing of assembled systems: given a correct system run, this technique can predict violations by alternative system runs without constructing those runs. We discuss the application of our approach to testing a multi-threaded NASA application, where we treat threads as components.
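
    As a small illustration of the local checks (the monitor, events, and names are invented for this sketch, not taken from the NASA application), a recorded component trace can be run against a finite-state monitor for an assumption or a local requirement:

        # Run a component trace through a DFA monitor given as a
        # {(state, event): next_state} table; a missing entry is a violation.
        def trace_satisfies(trace, monitor, start, accepting):
            state = start
            for event in trace:
                state = monitor.get((state, event))
                if state is None:
                    return False
            return state in accepting

        # Toy assumption A for an arbiter client: never "release" before "acquire".
        A = {("idle", "acquire"): "held", ("held", "release"): "idle"}
        print(trace_satisfies(["acquire", "release"], A, "idle", {"idle"}))  # True
        print(trace_satisfies(["release"], A, "idle", {"idle"}))             # False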